# Test-Time Computation
The rank1 family consists of three Transformers-based large language models for English, released by jhu-clsp under the MIT license.

| Model | Base model | Description |
|---|---|---|
| rank1-0.5b | Qwen2.5-0.5B | Information retrieval reranking model that improves relevance-judgment accuracy by generating reasoning chains. |
| rank1-32b | Qwen2.5-32B | Information retrieval reranking model that judges relevance by generating reasoning chains. |
| rank1-7b | Qwen2.5-7B | 7-billion-parameter reranking model that performs relevance judgments through generated reasoning chains. |
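All three models follow the same recipe: a causal LM fine-tuned to emit a reasoning chain before giving a relevance verdict for a query and passage. Below is a minimal sketch of how such a reranker could be called with Transformers; the prompt wording and decoding settings here are illustrative assumptions, not the official rank1 template, so consult the model cards for the exact usage.

```python
# Minimal sketch of reasoning-chain reranking with the 0.5B checkpoint.
# The prompt format below is a hypothetical stand-in for the model's real template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jhu-clsp/rank1-0.5b"  # smallest of the three rank1 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

query = "what causes rainbows"
passage = (
    "Rainbows are caused by refraction and dispersion of sunlight "
    "in water droplets in the atmosphere."
)

# Hypothetical prompt: ask the model to reason about relevance, then answer true/false.
prompt = (
    f"Query: {query}\n"
    f"Passage: {passage}\n"
    "Is the passage relevant to the query? Think step by step, "
    "then answer true or false.\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# The generated text contains the reasoning chain followed by the relevance judgment.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In practice a reranker like this would be run over many candidate passages per query, sorting them by the verdict or by the probability the model assigns to the positive answer token.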